%elepha[s89,jmc]		Elephant 2000
\input memo.tex[let,jmc]
\centerline{INCOMPLETE PRELIMINARY DRAFT of \jmcdate}
\bigskip
\rightline{\vbox{\it
  \hbox{I meant what I said, and I said what I meant.}
  \hbox{An elephant's faithful, one hundred percent!}
  \smallskip
  \hbox{\rm moreover,}
  \smallskip
  \hbox{An elephant never forgets!}}}
\bigskip
\title{Elephant 2000: A Programming Language Based on Speech Acts}


\medskip
\noindent Abstract: Elephant 2000 is a vehicle for some ideas about
programming language features.  We expect these features to be
valuable in writing and verifying programs that interact with
people (e.g. transaction processing) or interact with programs belonging
to other organizations (e.g. electronic data interchange).
\hfill\break 1. Communication inputs and outputs are in an I-O
language whose sentences are meaningful speech acts approximately
in the sense of philosophers and linguists.  These include
questions, answers, offers, acceptances, declinations, requests,
permissions and promises.
\hfill\break 2. The correctness of programs is partially defined in
terms of proper performance of the speech acts.  Answers should
be truthful, and promises should be kept.  Sentences of logic expressing
these forms of correctness can be generated automatically
from the form of the program.
\hfill\break 3. Elephant source programs may not need data
structures, because they can refer directly to the past.  Thus a
program can say that an airline passenger has a reservation if he
has made one and hasn't cancelled it.
\hfill\break 4. Elephant programs themselves will be represented as
sentences of logic.  Their properties follow from this
representation without an intervening theory of programming or
anything like Hoare axioms.
\hfill\break 5. Elephant programs that interact non-trivially with
the outside world can have both {\it illocutionary} and {\it perlocutionary}
specifications, i.e. behavioral specifications relating inputs and outputs,
and specifications concerning what they accomplish in the world.

The present draft is very incomplete, and not all the above
features are presented or even worked out.  However, most of the goals
are explained, and the author welcomes help and advice in achieving them.
Since the final version will probably be different in many ways, readers
with a casual interest should skip this draft.
\eject
\section{Introduction}

	This paper proposes several new features for programming
languages.  As a vehicle we introduce Elephant 2000, a language
that would embody them.  In the present draft, these features are
not fully embodied in the program examples.

	1. Elephant programs communicate with people and other
programs in sentences of the Elephant I-O language which have
meanings partially determined by the Elephant language and
partially implicit in the program itself.  Thus Elephant input
and output includes, and the I-O language distinguishes, {\it
requests}, {\it questions}, {\it offers}, {\it acceptances of
offers}, {\it permissions} as well as {\it answers to questions}
and other assertions of fact.  Its outputs also include
statements of {\it commitment} analogous to promises.  Some
communications between parts of a program may also usefully be
treated as speech acts.

	Non-communication observations and actions
may have meanings defined in the Elephant language.

	2. Some of the conditions for correctness of an Elephant
program are defined in terms of the meaning of the inputs and
outputs.  We can ask whether an Elephant program fulfilled a
request, answered a question truthfully, accepted
an offer or fulfilled a commitment. We can also ask whether the
program has authority to do what it does.  We call such
correctness conditions {\it intrinsic}, because the text of the
program determines them.  Expressing these intrinsic correctness
conditions as sentences of logic requires a formal theory of what
it means to fulfill a commitment, etc.  This theory doesn't have
to correspond exactly to human behavior; we only need analogs
useful for program correctness.  
%
For example, when a program
``promises'' someone to do something, it needn't believe
that fulfillment of the promise will do its recipient some
good.  Indeed some programs that promise might not have any
beliefs at all.
%
We expect to be able to generate the intrinsic correctness sentences
 automatically from the text of the program.  For example,
given the text of the program that designates certain outputs
as answers to questions, there is a logical sentence corresponding
to the assertion that the answers are truthful.

	3. Elephant programs do not require data structures,
because program statements can refer directly to past events and
states.  An Elephant interpreter keeps a history list of past
events, and an Elephant compiler constructs whatever data
structures in the object language are needed to remember the
information required for the compiled program to behave as specified in
the Elephant source program.  It isn't yet clear whether it will be
convenient to omit data structures completely from the language.

	4. An Elephant program is itself a logical sentence
(or perhaps a syntactic sugaring of a logical sentence).  The
correctness properties of the program are logical consequences
of this sentence and a theory of the domain in which the
program acts.  Thus no logic of programs is needed.  Any sugaring
should be reversible by a statement-by-statement syntactic
transformation.

	5. Requests, permissions and promises such as those we
want Elephant programs to perform are called {\it speech acts} by
philosophers and linguists.  Their idea is that certain sentences
don't have only a declarative significance but are primarily
actions.  A paradigmatic example is a promise, whose utterance
creates an {\it obligation} to fulfill it and is therefore not
merely a statement of intention to do something.  For many
purposes, we can bypass the philosophical complexities of
{\it obligation} by considering only whether a program {\it does}
fulfill its promises, not worrying about whether it is obliged
to.

	Using the customary terminology, some of the outputs of
Elephant programs are {\it performative} sentences, commonly
referred to just as {\it performatives}.  Indeed Elephant 2000
started with the idea of making programs use performatives.
However, as the ideas developed, it seemed useful to deviate from
the notions of speech act discussed by J. L. Austin (1962) and
John Searle (1969).  Thinking about speech acts from the {\it
design standpoint} of Daniel Dennett (1971) leads to a view of
them different from the conventional one.  We now refer to {\it
abstract performatives} which include purely internal actions
such as {\it commitments} not necessarily expressed in output,
but on whose fulfillment the correctness of the program depends.
Taking the design stance in the concrete way needed to allow
programs to use speech acts leads to new views on the
philosophical problems that speech acts pose.

	Notice that we don't need to apply moral terms like {\it
honest}, {\it obedient} or {\it faithful} to the program, and we
won't in this paper.  However, we can incorporate whatever abstract
analogs of these notions we find useful.  The philosophical
investigations have resulted in ideas useful for our purposes.
This is partly because programs belonging to one organization
that interact with those belonging to other organizations
will have to perform what amounts to speech acts, and the
specifications of these programs that have to be verified
often correspond to what Austin calls the {\it happy} performance
of the speech acts.

	(McCarthy 1979) discusses conditions under which
computer programs may be ascribed beliefs, desires, intentions
and other mental qualities.  It turns out that some specifications
require ascription of beliefs, etc. for their proper statement,
and others do not.

	This article is exploratory, and we are not yet prepared
to argue that every up-to-date programming language of the year
2000 will include {\it abstract performatives}.  We hope that
programs using performatives will be easier to write, understand,
debug, modify and (above all) verify.
 Having a standard vocabulary of requests,
commitments, etc. will help.

	Allowing direct reference to the
past may also permit easier modification, because the program can
refer to past events, e.g. inputs and outputs, directly rather
than via data structures whose design has to be studied and
understood.  Since referring directly to past events is
characteristic of natural languages, we expect it to prove useful
in programming languages.

	6. The theory of speech acts distinguishes between
 {\it illocutionary}
 acts, such as telling someone something, and
 {\it perlocutionary}
 acts, such as convincing him of it.
The distinction between illocutionary and perlocutionary
can be applied to speech inputs as well as outputs.  Thus there
is an input distinction between {\it hearing that} and {\it learning that}
analogous to the output
distinction between telling and convincing.

	Procedures for human execution often specify perlocutionary
acts.  For example, a teacher might be told, ``Have your students
take the test on Wednesday''.  However, including
perlocutionary acts in programs is appropriate only if the
program has good enough resources for accomplishing goals to make
it reasonable to put the goal in the program rather than actions
that the programmer believes will achieve it.  One would
therefore expect perlocutionary statements mainly in programs
exhibiting intentionality, e.g. having beliefs, and involving
some level of artificial intelligence.

	Even without putting perlocutionary acts in the program itself, it is
worthwhile to consider both illocutionary and
perlocutionary {\it specifications}
 for programs.
For example, an air traffic control
program may be specified in terms of the relations between its
inputs and its outputs.  This is an illocutionary specification.
Its verification involves only the semantics of the programming
language.
  However, our ultimate goal is to specify and verify that
the program prevents airplanes from
colliding, and this is a perlocutionary specification.  Proving that
a program meets perlocutionary specifications must be based on
assumptions about the world, the information it makes available
to the program and the effects of the program's actions as well
as on facts about the program itself.

	It will usually be
worthwhile to formulate both illocutionary and
perlocutionary specifications for the same
program and to relate them.  Thus
an argument based on an axiomatic theory of the relevant aspects of
 the world may be used to
show that a program meeting certain illocutionary specifications
will also meet certain perlocutionary specifications.
The problem of justifying such axiomatic theories is like that of
justifying other formalized theories of the physical world in
applied mathematics.

	Anyone who doesn't like the terms {\it illocutionary} and
{\it perlocutionary} is invited to refer to {\it behavioral}
or {\it input-output}
specifications and {\it accomplishment} specifications.

	7. The most obvious applications of Elephant are in programs
that do transaction processing and refer to databases in more
general ways than just answering queries and making updates.
Abstract performatives will also be important for programs
involved in business communication with programs belonging
to other organizations.  (McCarthy 1982) describes a ``Common
Business Communication Language''.
\section{Speech Acts and Abstract Performatives}

	Philosophers and linguists mostly treat speech acts
naturalistically, i.e. they ask what kinds of speech acts people
perform.  Even when they intend to approach the problem abstractly,
the analyses seem to me to be too close to human behavior.
  Our approach is to ask what kinds of speech act are
useful in interactive situations.  The nature of the interaction
arises from the fact that the different agents have different
goals, knowledge and capabilities, and an agent's achieving its
goals requires interaction with others.  The nature of the
required interactions determines the speech acts required.  Many
facts about what speech acts are required are independent of
whether the agent is man or machine.

	Often our programs can perform successfully with a small
repertoire of speech acts, even though these speech acts don't have
all the properties John Searle (1969) and other philosophers have
ascribed to human speech acts.

	Here are some of the kinds of speech acts we plan to provide
for.

	1. Assertions.  One correctness condition is that the
assertions be truthful.  Of course, proving them true is based on
axioms about the domain in which the program operates.  Another
correctness condition is that the assertions be sincere.  This
requires a theory of the beliefs of the program.  If proving the
truth or sincerity of certain kinds of statements is too
difficult, the programmer may choose to omit such correctness
conditions and ask the user to rely on his intuitions.

	2. Questions.  The user can question the program, and the
program can question the user.

	3. Answers to questions.  These are assertions and should
be truthful and sincere.  However, they should also be responsive
to the questions.  If the question calls for a yes or no answer,
responsiveness is easier to formulate than otherwise.  However,
there may still be difficulties of the sort discussed by Grice in
the papers collected in (Grice 1989).  For some questions, the
appropriate answer is ``I don't know''.  Other questions make
assumptions that the questionee may know to be false, and
a literal answer may be inappropriate.

	Suppose someone asks for a person's telephone number.
Then it is necessary to be able to require that a responsive
answer be a sequence of digits.

	The assertions and questions involve certain predicate
and function symbols common to the user and the program.  These
make up the {\it I-O language} of the program and are to be
combined using the tools of first order logic.  Natural language
front ends might be provided if found helpful.  The I-O language
of a particular program is described in the manual for the
program.  However, certain predicates and functions are common to
all Elephant 2000 programs, whereas many others are close enough to
ordinary language words to be usable without study.

	4. Commitments and promises.  We consider some promises
as made up of two parts---an internal commitment and the
statement of the commitment.  An internal commitment is a purely
internal action by a program.  Its operand is an act that the
program can perform (usually including something about when the
action must be done in its statement).  A program that makes
commitments is correct only
if it fulfills them, but additional conditions
will usually be imposed.
  %%Puzzling Leora comment.

	{\it Simple promises} may be defined as internal
commitments to do something, and assertions that the commitment
exists.  Correctness requires that the assertion be truthful.

	However, Austin (1962) points out that uttering a promise
creates an obligation in a public sense.  This may be important
also for computer programs, since a promise by the program may
create an obligation, perhaps legal, on the part of the organization
operating the program.  Sometimes, it will be convenient to
settle for specifications in terms of fulfilling internal
commitments, and sometimes the public character of promises
will have to be taken into account in the specifications.

	Internal commitments constitute one kind of {\it abstract
performative}; presumably there are others.  It is abstract in that its
content is independent of how it is expressed, and it need not
be externally expressed at all.

	We also propose to handle external obligations in an abstract
way.  Suppose notions of type 1 and type 2 obligations are introduced.
We specify that a reservation program incurs a type 1 obligation
when it makes a reservation and imposes a type 2 obligation on the
organization that operates it.  What type 1 and type 2 obligations
are, e.g. the legal requirements they impose, how they are to be
treated when they conflict with other considerations,  and the consequences
of their non-fulfillment, is subject to institutional definition.
\section{Referring to the Past}

	Algolic programs refer to the past via variables, arrays
and other data structures.  Elephant 2000 programs can refer to
the past directly.  This is not such a great difference, because
the program can be regarded as having a single virtual history list or
``journal''.  It is then a virtual side-effect
of actions to record the action in the history list.  Calling the
history list virtual means that it may not actually take that
form in a compiled program which may use conventional arrays and
not record information that isn't going to be referred to.
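
	For concreteness, here is a minimal sketch in Python of the
journal view just described.  The event names and the representation of
events as tuples are invented for the illustration; only the idea of
answering a question by scanning the recorded past is taken from the text.

    journal = []                          # the single virtual history list

    def record(event, **data):
        # Recording an action in the journal is a (virtual) side effect.
        journal.append((event, data))

    def has_reservation(psgr, flt):
        # Scan the past, most recent event first: the passenger has a
        # reservation if one was made and not cancelled since.
        for event, data in reversed(journal):
            if data.get("psgr") == psgr and data.get("flt") == flt:
                if event == "make_reservation":
                    return True
                if event == "cancel_reservation":
                    return False
        return False

    record("make_reservation", psgr="smith", flt=123)
    record("cancel_reservation", psgr="smith", flt=123)
    assert not has_reservation("smith", 123)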

	Elephant programs refer to the past in a variety of ways.

(more to come)
%Mar 22
%Exit the most recent entry that has not been exited.
%
%Do the earliest task that hasn't been done.
%
%There are event of entry and places of entry.  Likewise for exits.
\section{The Structure of Elephant 2000 Programs}

	Elephant 2000 will be described in terms of an
interpreter.  Compiled programs are to have the same input-output
behavior, but the object program uses ordinary data structures
instead of referring directly to the past.

	We suppose that only one input reaches the program at a time.
The runtime system is supposed to achieve this.  We also suppose
that inputs not of the required form are rejected, so that the
Elephant program itself doesn't have to say what is done with them.

	The program responds to each input as it is received.
Thus it can be regarded as a stimulus-response machine.  However,
in deciding on a response the program may inspect the entire past
of its previous inputs and responses.
\section{Examples of Programs}

1. Here is an airline reservation program accompanied by an
explanation of the notations used.  The identifiers in bold face
are defined in the Elephant language; the others are identifiers
of the particular program.

${\bf accept.request}$ is an action defined in the Elephant language.
It means to do what is requested.

${\bf answer.query}$ is the action of answering a query.

In general, $admit(psgr,flt)$ means admitting the passenger
$psgr$ to the flight $flt$.  It is an action defined in the
particular program.  However, it has two possible interpretations,
(1) telling the agent at the gate to let the passenger in and (2)
actually letting the passenger in.  The first is an illocutionary
act, the second is a perlocutionary act.

	{\bf commitment} is a function taking a future action as
an argument and generating from it an abstract object called a
commitment.  The internal actions {\bf make} and {\bf cancel} and
the predicate {\bf exists} apply to commitments.  {\bf make},
{\bf exists} and {\bf cancel} are part of the Elephant language
and have meanings and are axiomatized independently of the
particular abstract objects being made, tested and destroyed.
The notion of faithfulness involving the fulfillment of
commitments will not necessarily extend to other abstract
entities.

$${\bf if} ¬full flt {\bf then} {\bf accept.request} {\bf make commitment}
 admit(psgr,flt).$$
%
In Elephant 2000, we associate to the right, so the above statement is
equivalent to
%
$${\bf if} ¬full(flt) {\bf then} {\bf accept.request}({\bf make commitment}
(admit(psgr,flt))).$$
%
%
$${\bf answer.query} {\bf exists commitment} admit(psgr,flt).$$
%
$${\bf accept.request} {\bf cancel commitment} admit(psgr,flt).$$
%
$$\eqalign{{\bf if} {\bf now} = {\bf time} flt
& ∧ {\bf exists commitment} admit(psgr,flt)\cr
&{\bf then} {\bf accept.request} admit(psgr,flt)\cr}.$$
%
$$full flt ≡ card\{psgr | {\bf exists commitment} admit(psgr,flt)\} = capacity flt.$$
%
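
	The following Python sketch is one possible reading of the
statements above, with commitments represented as a set of $(psgr,flt)$
pairs.  The handler names, the sample capacity and the omission of the
${\bf now} = {\bf time}\ flt$ check are assumptions of the sketch, not
part of the Elephant definition.

    capacity = {123: 2}               # capacity flt (sample value)
    commitments = set()               # the commitments admit(psgr, flt) that exist

    def full(flt):
        # full flt == card{psgr | exists commitment admit(psgr, flt)} = capacity flt
        return sum(1 for (_, f) in commitments if f == flt) == capacity[flt]

    def request_reservation(psgr, flt):
        # if not full flt then accept.request make commitment admit(psgr, flt)
        if not full(flt):
            commitments.add((psgr, flt))

    def query_reservation(psgr, flt):
        # answer.query exists commitment admit(psgr, flt)
        return (psgr, flt) in commitments

    def cancel_reservation(psgr, flt):
        # accept.request cancel commitment admit(psgr, flt)
        commitments.discard((psgr, flt))

    def board(psgr, flt):
        # if ... exists commitment admit(psgr, flt) then accept.request admit(psgr, flt)
        # (the now = time flt check is elided in this sketch)
        return (psgr, flt) in commitments     # True means: admit the passenger

    request_reservation("smith", 123)
    assert query_reservation("smith", 123) and not full(123)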

	Even for this simple reservation program it is
worthwhile to distinguish between illocutionary and perlocutionary
specifications.  The distinction comes up in the interpretation
of $admit(psgr,flt)$.  If we interpret it as ordering the
admission, then it's an illocutionary act.  If we interpret
is as making sure the passenger can actually get on, then it
is a perlocutionary act.

	It is an illocutionary specification of the program
that it not order the admission of more passengers than the
capacity of the flight.  It is an external fact that the
plane will hold its capacity and not more.

2. Similar considerations apply to a minimal program for a control tower.

It receives requests to land.  It can perceive the positions
of the airplanes.  It can issue instructions like ``Cleared to
land'' or ``You are number 3 following the red Cessna'' or
``Extend your downwind leg'' or ``Turn base now''.
Its illocutionary specifications involve only its perceptions
and its inputs and outputs.  They include that it should not
say ``Cleared to land'' to an airplane until the previous
airplane is believed to have actually landed.  The verification of the
illocutionary specifications involves facts about the program
itself.

The perlocutionary specifications provide that the airplanes
should land safely and as promptly as is compatible with safety.
Their verification involves assumptions about the correctness of
the program's perceptions and the behavior of airplanes in
response to instructions.

A specialist in airplanes may verify that if the program
meets its illocutionary specifications, then it will
also meet its perlocutionary specifications.

The concepts of illocutionary and perlocutionary specifications
apply to programs that interact with the outside world in general
and not just Elephant programs.  However, we expect them to be
easier to state and verify for Elephant programs.

\section{Implementation}

	We contemplate that Elephant 2000 will have two kinds
of implementation, interpreted and compiled.

	The interpreted implementation has a single simple data
structure---a history list of events.  Inputs will certainly be
included as events.  In principle, this is all that is required.
However, including actions by the program in the history enables
the interpreter to avoid repeating certain computations.  These
actions are both external and internal.
%  Histories resemble the
%``journals'' kept by some transaction systems.
%%Leora wants a reference.

	The interpreter then examines inputs as they come in and
according to the rules decides what to do with each one.  The
version of Elephant 2000 presently contemplated only provides for certain
requests, etc. and assumes they arrive in some order.  The necessary
synchronization is performed by the runtime system outside of the
program itself, and so is the rejection of inputs that don't
match any program statement.

	Matching event patterns to the history of events is done
explicitly each time an input is interpreted.  The actions that
are logged are those that are part of the Elephant 2000 language.
A simple form of matching is that done by Prolog, and it may
be adequate.

	The interpreter will therefore be somewhat slow, but for
many purposes it will be adequate.

	An Elephant 2000 compiler will translate programs into an
ordinary programming language, e.g.  Common Lisp or C.  It would
put data structures in the object program that would permit
reference to these structures in order to decide what to do and
would update the structures according to the action taken.  In a
full translation, there would be no explicit history in the
object program.
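
	As a rough illustration of the intended difference, the two
Python fragments below answer the same question---is a flight
full?---first by replaying the history, as the interpreter would, and
then by maintaining a counter, as compiled code might.  The event and
class names are invented for the example.

    # Interpreted style: the question "is the flight full?" is answered
    # by replaying the journal each time it is asked.
    def full_interpreted(history, flt, capacity):
        booked = set()
        for event, psgr, f in history:
            if f != flt:
                continue
            if event == "make":
                booked.add(psgr)
            elif event == "cancel":
                booked.discard(psgr)
        return len(booked) >= capacity

    h = [("make", "smith", 123), ("make", "jones", 123), ("cancel", "jones", 123)]
    assert not full_interpreted(h, 123, capacity=2)

    # Compiled style: the compiler introduces a data structure (here a
    # counter) that is updated as each action is taken, so no explicit
    # history remains in the object program.
    class CompiledFlight:
        def __init__(self, capacity):
            self.capacity = capacity
            self.booked = 0
        def make(self):
            self.booked += 1
        def cancel(self):
            self.booked -= 1
        def full(self):
            return self.booked >= self.capacity

    f = CompiledFlight(capacity=2)
    f.make(); f.make(); f.cancel()
    assert not f.full()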
\section{Verification of Elephant 2000 Programs}

	Since Elephant 2000 programs will be sugared versions of
logical sentences,  their properties will be logical
consequences of the sentences expressing the program, the axioms
of Elephant 2000 and a theory of the data domains (if any) of the
program.  In this respect Elephant resembles Algol 48 and
Algol 50, described in a later
section.

(more to come)
\section{Levels of Intentionality}

	We discuss philosophical work on speech acts with two objectives.
First, we consider what the computer language use of speech
acts can learn from the extensive work by philosophers.  Second,
considering speech acts as we want computers to do them sheds
light on the philosophical problems.  The two aspects of the philosophical
treatments are so interrelated that we discuss them together.
Here are some remarks.

1. Philosophers treat speech acts as natural phenomena to be
studied.  However, they propose not to treat them from the point
of view of anthropology or linguistics.  Instead they study their
essential characteristics in terms of what they accomplish.  This
makes their work more relevant to computer science and artificial
intelligence than linguistic or anthropological work would be.

2. My view of speech acts is that they are necessary in the {\it common
sense informatic situation} in which people interact with each other
to achieve their goals.  The most important features of this informatic
situation are independent of the fact that we are humans.  Martians or
robots with independent knowledge and goals would also require speech
acts, and many of these would have similar characteristics to human
speech acts.

3. The point of this paper is that speech acts are valuable when we
design computer systems to interact with humans and with each other.

4. However, only some of the characteristics that philosophers have
ascribed to speech acts are valuable for our purposes.  Which ones
they are will depend on the purposes.

5. Besides the speech acts that are common in human society, it is
convenient to invent others.  Indeed human institutions often involve
the invention of speech acts.  An example is the airplane reservation
discussed in this paper.

6. A particular kind of speech act is an entity in an approximate
theory in the sense of (McCarthy 1979).  For this reason attempts
at precise definitions, e.g. of an airplane reservation, are likely
to be beside the point.  Instead we will have nonmonotonic axioms
(McCarthy 1986) that partially characterize them.

7. Regarding speech acts as events of execution of program statements
may be useful for philosophers also.

8. Performatives that are not really speech acts because they don't
result in external output are also useful.  Our main example is the
commitment.  When a program makes a commitment, its correctness
requires the fulfillment of the commitment.

9. A key question we share with philosophers is that of what must
be true in order that a speech act of a given kind be successfully
performed.

10. This paper plays some role in the controversy between John
Searle (1984) and the artificial intelligence community about
whether computers can really believe and know.  As I understand
his position, its extension would say that computers can't really
promise either.  Our position is that Elephant 2000 programs can
perform some kinds of speech acts as genuinely as do humans.  The
difference between what Elephant 2000 programs do and what some
humans do in this respect will be similar to the differences
among humans.  We expect that the programming community will for a
long time be interested in speech acts of a more limited sort than
Searle has discussed.  However, human speech acts when performed
in certain institutional settings also have a limited character.
For example, giving a reservation usually does not involve an
opinion that it will benefit the person to whom it is given.

It's not clear that a difference of opinion on this point has practical
consequences for programming.

11. It will often be possible to regard a program not written in Elephant
as though it were by regarding its inputs as questions and requests and
its outputs as promises, etc.  This is the Elephant analog of Newell's
knowledge level or of my (1979) discussion of ascribing mental qualities to machines.

12. Austin and Searle distinguish {\it illocutionary} from {\it
perlocutionary} speech acts.  An example is that ordering someone
to do something is illocutionary, but getting him to do it is
perlocutionary.  The same sentence may serve as both, but the
conditions for successful perlocutionary acts don't just involve
what the speaker says; they involve its effect on the hearer.
Both philosophers mention difficulties in making the distinction
precise, but for the purposes of Elephant it's easy.

	The correctness conditions for an illocutionary act
involve the state of the program and its inputs and outputs.  The
correctness conditions for a perlocutionary act depend also on
events in the world.  An airline reservation program may
reasonably be specified in terms of the illocutionary acts it
performs, i.e.  by its inputs and outputs, whereas the
correctness of an air traffic control program is essentially
perlocutionary, because stating the full correctness of the
latter involves stating that it prevents the airplanes from
colliding.

	The same program may have two kinds of specifications
and consequently two notions of correctness---illocutionary
and perlocutionary.  The illocutionary specifications of a
program are in terms of its outputs as functions of the
inputs.  The perlocutionary specifications are in terms
of what the program accomplishes in the world.  It may
be useful to give both kinds of specification for the
same program and consider their relations.  Proving that
a program meets its perlocutionary specifications requires
assumptions about the world.  It may be useful to prove for
a given program that if it meets its illocutionary specifications
it will also meet its perlocutionary specifications.  For
example, we would use our assumptions about the behavior
of airplanes and pilots to show that if an air traffic
control program emitted outputs in accordance with its
specifications, then the airplanes wouldn't collide.

	In this connection it may be worthwhile to go beyond
philosophical usage and apply the term {\it perlocutionary} to inputs as
well as outputs.  Namely, perlocutionary conditions on the inputs
state that they give facts about the world, e.g. the locations
of the airplanes.  The correctness of an air traffic control
program depends on assumptions about the correctness of the inputs
as well as on assumptions about the obedience of the pilots and
the physics of airplane flight.
\section{Connections with Artificial Intelligence}

	The full use of speech acts requires intelligence on
the human level.  Artificial intelligence has made progress,
and there is a useful expert systems technology based on this
progress, but human-level intelligence is still an unknown
distance ahead of us.  Therefore, it is important to analyze
the proposed uses of speech acts and decide how much intelligence
is required for each use.  It is a contention of this paper
that many uses of speech acts do not require many of the
intellectual capabilities of humans.  We further hope that
Elephant 2000 will permit using whatever level of AI is
feasible for the designers of any particular system.  Care
will be needed to ensure that a program doesn't require
too much intelligence of the programs with which it interacts.

(more to come)
\section{Domain of Application}

	A very large fraction of office work involves
communicating with people and organizations outside the office
whose procedures are not controlled by the same authority.
Getting the full productivity benefit from computers depends on
computerizing this communication.  This cannot be accomplished by
 approaches to distributed processing which
assume that the communicating processes are designed as a group.

	Automating such communication is easiest when it is
absolutely standard what is communicated and when one of the
organizations is in a position to dictate formats.  For example,
in the 1950s the IRS dictated a magnetic tape format on which it
was prepared to receive reports of wages and deductions.

	 The initial approaches to electronic
data interchange, EDI, have involved exchanges of information
between a large manufacturer and its suppliers, where the
customer could dictate the form of the interchange.
This works best when the supplier has only this one
customer.

	A next step is provided by standards like X12 which
provide standard electronic formats for a fixed set of commercial
documents, e.g. invoices.  Presumably, X12 forms have fixed
collections of slots in which can be inserted numbers, e.g.
quantities and prices, or labels, e.g. names of people, places
and things.

	(McCarthy 1982) proposes a ``Common Business Communication
Language'' that would provide for a language of business
messages.  These messages could include complex descriptions of
various entities, e.g. price formulas, schedules of delivery,
schedules of payment, configurations of equipment and terms
for contracts.  That paper envisages that firms may issue
electronic catalogs and advertisements that cause them
to be listed automatically as suppliers of certain items.
Programs looking for good prices and delivery conditions for
the items might automatically negotiate with the programs
representing the sellers and even issue purchase orders within
their authority to do so.
  (McCarthy 1982) did not discuss the programs
that would use CBCL.

	Programs that carry out external communication need several
features that substantially correspond to the ability
to use speech acts.  They need to ask questions, answer them,
make commitments and make agreements.  They need to be able
to communicate assurance that their agreements will be honored
by the organizations they belong to.  This is best assured by
some kind of authority tree extending up from programs that
issue purchase orders through the programs that decide what
items are to be purchased and through the human hierarchy
of the organization.

	Because these programs will often operate with only
minimal human supervision, they need to be carefully verified.
The features of Elephant 2000 are important for this, because
the forms of speech act provided for have definite meanings.
They will also need frequent modification, often by people
other than those who wrote them---by non-programmers as much
as possible.

	As people acquire more home computers and use them more
and more for doing business with firms, transaction processing
programs will become more important, and will be more used by
people other than employees of the organizations operating the
programs.  This will also put requirements on verification
and modifiability.

	Present reservation programs have many limitations that
using Elephant 2000 features will encourage correcting.  Often
they don't even emit strings that are supposed to have long term
meaning.  Instead they emit display updates.  These display
updates cannot be conveniently used by programs belonging to
others, because changes in output intended to make the display
prettier can destroy the ability of other programs to decipher
them.  The result is that when a ticket has to be changed, the
airline counter clerk often retypes the name of the passenger,
because it cannot be taken from the screen showing the
reservation.  Elephant speech acts are a step above strings,
because the higher levels of the speech acts have a meaning
independent of the application program.
\section{Algol 50}

	This section is a warm-up for the next section.

	We introduce the ``programming languages'' Algol 48 and
Algol 50 to illustrate in a simpler setting some ideas to be used
in Elephant 2000.  These are the explicit use of time in a
programming language and the representation of the program by
logical sentences.  The former permits a direct expression of
the operational semantics of the language, and
the latter permits proofs of properties of
programs without any special theory of programming.  The
properties are deduced from the program itself together with
axioms for the domain.

	We use these names because the languages cover much
of the ground of Algol 60 but use only a mathematical formalism---
old-fashioned recursion equations---that predates the development
of programming languages.  They are programming languages I imagine
mathematicians might have created in 1950 had they seen the need
for something other than machine language.  Algol 48 is a preliminary
version of Algol 50 just as Algol 58 was a preliminary version of Algol 60.

	Consider the following Algol 60 fragment:

%$$\eqalign{
%start:\quad &p := 0;\cr
%	&i := n;\cr
%loop: &{\bf if} i=0 {\bf then go to} done;\cr
%	&p := p+m;\cr
%	&i := i-1;\cr
%	&{\bf go to} loop;\cr
%done:\cr}.$$
%
% This is Carolyn's incantation for programs
$$\vtop{\openup\jot\ialign{&$#$\hfil\cr
start:\quad &p := 0;\cr
	&i := n;\cr
loop: &{\bf if} i=0 {\bf then go to} done;\cr
	&p := p+m;\cr
	&i := i-1;\cr
	&{\bf go to} loop;\cr
done:\cr}}.$$

	The program computes the product $mn$ by initializing a
partial product $p$ to $0$ and then adding $m$ to it $n$ times.
The correctness of the Algol 60 program is represented by the
statement that if the program is entered at $start$ it will reach
the label $done$, and when it does, the variable $p$ will have
the value $mn$.  Different program verification formalisms
represent this assertion in various ways, often not entirely
formal.

	Its partial correctness is conventionally proved by
attaching the invariant assertion $p=m(n-i)$ to the label $loop$.
Its termination is proved by noting that the variable $i$ starts
out with the value $n$ and counts down to 0.  This proof is
expressed in various ways in the different formalisms for
verifying programs.

	In Algol 48 we write this algorithm
as a set of old-fashioned recursion equations for three functions
of time, namely $p(t)$, $i(t)$ and $pc(t)$, where the first two
correspond to the variables in the program, and $pc(t)$ tells how
the ``program counter'' changes.  The only ideas that would have been
unconventional in 1948 are the explicit use of a program counter and
the conditional expressions.  We have
%
$$\eqalign{p(t+1) = &{\bf if} pc(t) = 0 {\bf then} 0\cr
&{\bf else if} pc(t) = 3 {\bf then} p(t)+m\cr
&{\bf else} p(t)},$$
%
$$\eqalign{i(t+1) = &{\bf if} pc(t) = 1 {\bf then} n\cr
&{\bf else if} pc(t) = 4 {\bf then} i(t)-1\cr
&{\bf else} i(t)},$$
%
and
%
$$\eqalign{pc(t+1) = &{\bf if} pc(t) = 2 ∧ i(t)= 0 {\bf then} 6 \cr
&{\bf else if} pc(t) = 5 {\bf then} 2 \cr
&{\bf else} pc(t)+1}.$$

	The correctness of the Algol 48 program is represented by
the sentence
%
$$∀n(n≥0 ⊃ ∀t(pc(t) = 0 ⊃ ∃t'(t' > t ∧ pc(t') = 6 ∧ p(t') = mn))).$$
%
This sentence may be proved from the sentences representing the
program supplemented by the axioms of arithmetic and the axiom
schema of mathematical induction.  No special theory of programming
is required.  The easiest proof uses mathematical induction on $n$
applied to a formula involving $p(t) = m(n-i(t))$.
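
	The recursion equations can also be checked by direct
simulation.  The following Python fragment, included only as an
illustration, computes $p(t)$, $i(t)$ and $pc(t)$ step by step and
confirms that control reaches statement 6 with $p = mn$ for sample
values of $m$ and $n$.

    # Direct simulation of the Algol 48 recursion equations for p, i and pc.
    def run(m, n):
        p, i, pc = 0, 0, 0           # start with pc = 0 (the label "start")
        while pc != 6:               # 6 is the label "done"
            p1 = 0 if pc == 0 else p + m if pc == 3 else p
            i1 = n if pc == 1 else i - 1 if pc == 4 else i
            pc1 = 6 if (pc == 2 and i == 0) else 2 if pc == 5 else pc + 1
            p, i, pc = p1, i1, pc1   # all three equations advance together
        return p

    assert run(7, 5) == 35           # p = mn when pc reaches "done"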

	Algol 48 programs are organized
quite differently from Algol 60 programs.  Namely, the changes to
variables are sorted by variable rather than sequentially by
time.  However, by reifying variables, Algol 50 permits writing
programs in a way that permits regarding programs in this
fragment of Algol 60 as just sugared versions of Algol 50
programs.

	Instead of writing $var(t)$ for some variable $var$, we
write $value(var,\xi(t))$, where $\xi$ is a state vector giving
the values of all the variables.  In the above program, we'll
have $value(p,\xi(t))$, $value(i,\xi(t))$ and $value(pc,\xi(t))$.

	The variables of the Algol 60 program correspond to
functions of time in the above first Algol 50 version and
become distinct constant symbols in the version of Algol
50 with reified variables.
Their distinctness is made explicit by the ``unique names'' axiom

$$i≠p∧i≠pc∧p≠pc.$$

	In expressing the program we use the assignment and
contents functions, $a(var,value,\xi)$ and $c(var,\xi)$, of
(McCarthy 1963) and (McCarthy and Painter 1967).
$a(var,value,\xi)$ is the new state $\xi'$ that results when the
variable $var$ is assigned the value $value$ in state $\xi$.
$c(var,\xi)$ is the value of $var$ in state $\xi$.

	As described in those papers, the functions $a$ and $c$
satisfy the following axioms.
%
$$c(var,a(var,val,\xi)) = val,$$
%
$$var1 ≠ var2 ⊃ c(var2,a(var1,val,\xi)) = c(var2,\xi),$$
%
$$a(var,val2,a(var,val1,\xi)) = a(var,val2,\xi),$$
%
and
$$var1 ≠ var2 ⊃ a(var2,val2,a(var1,val1,\xi)) = a(var1,val1,a(var2,val2,\xi)).$$
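
	These axioms have an obvious finite model in which a state
vector is a table of values.  The following Python sketch, given only as
an illustration, realizes $a$ and $c$ as operations on dictionaries and
checks the four axioms on sample values.

    # A finite model of the assignment and contents functions: a state
    # vector is a dictionary from variable names to values.
    def a(var, val, xi):
        xi2 = dict(xi)               # states are treated as immutable values
        xi2[var] = val
        return xi2

    def c(var, xi):
        return xi[var]

    xi = {"p": 0, "i": 0, "pc": 0}
    assert c("p", a("p", 7, xi)) == 7                            # axiom 1
    assert c("i", a("p", 7, xi)) == c("i", xi)                   # axiom 2 (p != i)
    assert a("p", 2, a("p", 1, xi)) == a("p", 2, xi)             # axiom 3
    assert a("i", 2, a("p", 1, xi)) == a("p", 1, a("i", 2, xi))  # axiom 4 (p != i)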

	The following definitions
shorten the expression of programs.

$$step(\xi) = a(pc,c(pc,\xi)+1,\xi),$$
%
$$goto(label,\xi) = a(pc,label,\xi).$$

	We make the further abbreviation $loop = start+2$ specially for this
program, and with this notation our program becomes
%
$$\eqalign{∀t(\xi(t+1) = &{\bf if} c(pc,\xi(t)) = start\cr
&\quad{\bf then} step a(p,0,\xi(t))\cr
&{\bf else if} c(pc,\xi(t)) = start+1\cr
&\quad{\bf then} step a(i,n,\xi(t))\cr
&{\bf else if} c(pc,\xi(t)) = loop\cr
&\quad{\bf then} ({\bf if} c(i,\xi(t)) = 0 {\bf then} goto(done,\xi(t)) {\bf else}
step \xi(t))\cr
&{\bf else if} c(pc,\xi(t)) = loop + 1\cr
&\quad{\bf then} step a(p,c(p,\xi(t))+m,\xi(t))\cr
&{\bf else if} c(pc,\xi(t)) = loop +2\cr
&\quad{\bf then} step a(i,c(i,\xi(t))-1,\xi(t))\cr
&{\bf else if} c(pc,\xi(t)) = loop +3\cr
&\quad{\bf then} goto(loop,\xi(t))\cr
&{\bf else} \xi(t+1))\cr}
$$

	In Algol 50, the consequents of the clauses
of the conditional expression are in 1-1 correspondence with the
statements of the corresponding Algol 60 program.  Therefore, the
Algol 60 program can be regarded as an abbreviation of the
corresponding Algol 50 program.  The (operational) semantics of
the Algol 60 program is then given by the sentence expressing the
corresponding Algol 50 program together with the axioms
describing the data domain, which in this case would be the Peano
axioms for natural numbers.  The transformation to go from Algol
60 to Algol 50 would be entirely local, i.e. statement by
statement, were it not for the need to use statement numbers
explicitly in Algol 50.

	Program fragments can be combined into larger fragments
by taking the conjunction of the sentences representing them,
identifying labels where this is wanted to achieve a {\bf go to}
from one fragment to another and adding sentences to make sure
that the program counter ranges don't overlap.

	The correctness of the Algol 50 program
is expressed by
%
$$\eqalign{∀t \xi↓0(n ≥ 0 ∧ c(pc,\xi(t))=start& ∧ \xi(t) = \xi↓0\cr
&⊃ ∃t'(t' > t ∧ c(p,\xi(t')) = mn\cr
&\quad ∧ c(pc,\xi(t')) = done\cr
&\quad ∧ ∀var(¬(var \in \{p,i,pc\}) ⊃ c(var,\xi(t')) = c(var,\xi↓0))))\cr}$$

	Note that we quantify over all initial state vectors.  The
last part of the correctness formula states that the program fragment
doesn't alter the state vector other than by altering $p$, $i$ and the program counter.

	We have not carried the Algol 50 idea far enough to
verify that all of Algol 60 is conveniently representable in the same style,
but no fundamental difficulties are apparent.  In treating
recursive procedures, a stack can be introduced, but it would be
more elegant to do without it by explicitly saying that the
return is to the statement after the corresponding procedure call
and variables are restored to their values at the time of the
call.  This requires the ability to parse the past, needed also
for Elephant 2000.

	We advocate an extended Algol 50 for expressing the
semantics of Algol-like programming languages, although our
present purpose was just to use it as an illustration in a
simpler setting of features that Elephant will require.

	Nissim Francez and Amir Pnueli (see references) used an
explicit time variable for similar purposes.  Unfortunately, they abandoned
it for temporal logic.
\section{Elephant Programs as Sentences of Logic}

	This is the most tentative section of the present draft.
At present we have two approaches to writing Elephant programs
as sentences of logic.  The first approach is analogous to Algol 50.
It is based on updating a state of the program and a state of the
world.  Of course, the functions updating the state of the world will
be only partially known.  Therefore, unknown functions will occur, and
the knowledge we have about the world will be expressed by subjecting
these functions to axioms.  The second approach expresses the program
and tentatively what we know about the world in terms of events.
The events occur at times determined by axioms.  We describe the
events that we assert to occur and rely on circumscription to limit
the set of occurrences to those that follow from the axioms given.
We begin with the Algol 50 approach.

	As with Algol 50, in our first approach we use a state
vector $\xi(t)$ for the state of the program during operation.
The program is expessed by a formula for $\xi(t+1)$.  Since the
program interacts with the world, we also have a state vector
$world(t)$ of the world external to the computer.  Because we
can't have complete knowledge of the world, we can't expect to
express $world(t+1)$ by a single formula, but proving
perlocutionary specifications will involve assumptions about the
functions that determine how $world(t)$ changes.

	We have
%
$$\xi(t+1) = update(i(t),\xi,t),$$
%
$$i(t) = input world(t),$$
%
and
%
$$world(t+1) = worldf(output \xi(t),world,t).$$
%
Notice that we have written $\xi$ and $world$ on the right sides
of the two equations rather than $\xi(t)$ and $world(t)$, which
might have been expected.  This allows us to let $\xi(t+1)$ and
$world(t+1)$ depend on the whole past rather than just the present.
(It would also allow equations expressing dependence on the future.)

	We have written part of the reservation program in this form,
but it isn't yet clear enough to include in the paper.
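
	Schematically, the three equations describe the following loop,
sketched here in Python with stub bodies for $update$, $worldf$ and the
input and output extraction functions, since those are exactly the parts
that are unknown or only partially known.

    # Schematic shape of the first approach.  All four functions are stubs;
    # a real program would supply update and the input/output extraction,
    # while worldf is only partially known and is constrained by axioms.
    def input_fn(world_state):   return world_state           # i(t) = input world(t)
    def output_fn(xi_state):     return xi_state              # output xi(t)
    def update(inp, xi, t):      return {"last_input": inp}   # xi(t+1) = update(i(t), xi, t)
    def worldf(out, world, t):   return {"last_output": out}  # world(t+1) = worldf(output xi(t), world, t)

    xi = [{"pc": 0}]             # xi[t] is the program state at time t
    world = [{"obs": "start"}]   # world[t] is the external state at time t

    for t in range(5):
        i_t = input_fn(world[t])
        xi.append(update(i_t, xi, t))                     # the whole history xi is passed,
        world.append(worldf(output_fn(xi[t]), world, t))  # as in the equations above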

	The second approach uses separate functions and predicates
with time as an argument.  In this respect it resembles Algol 48.

 $$\eqalign{(∀psgr flt t)&(input t = request make commitment admit(psgr,flt)\cr
	&\quad\quad ∧ ¬ holds(t,full flt)\cr
&⊃ (∃ seat) (holds(t,available(seat,flt)) ∧\cr
&\quad arises(t, commitment admit(psgr,flt, seat))\cr
	&\quad ∧ outputs(t,promise admit(psgr,flt,seat)))).\cr}$$

Note the use of $admit$ first with two arguments and later with three.
We suppose that $admit$ actually has a large set of arguments with
default values.  When an $admit(\ldots )$ expression is encountered,
it is made obvious which arguments are being given values by the
expression and which get default values.  The signals for this are
the name of the variable or an actual named argument as in Common
Lisp.  The compiler will have to come up with a way of assigning
a seat from those available.

$$\eqalign{
(∀psgr flt t) &(input t = query exists commitment admit(psgr,flt)\cr
&⊃ outputs(t,if exists(t, commitment admit(psgr,flt)) then\cr
&confirm(commitment admit(psgr,flt,seat)) else deny(input t)))\cr}$$

$$\eqalign{
(∀psgr flt t) &(input t = request cancel commitment admit(psgr,flt)\cr
&⊃ revoke(t,commitment admit(psgr,flt)))\cr}$$

$$\eqalign{
(∀t x)(&exists(t,commitment x) ≡ (∃t')(t' < t ∧ arises(t', commitment x)\cr & ∧
(∀t'')(t' < t'' < t ⊃ ¬revoke(t'', commitment x))))\cr}$$

$$\eqalign{
(∀psgr flt t) &(input t = request admit(psgr,flt)\cr & ⊃
if exists(t, commitment admit(psgr,flt))\cr &\quad\quad then outputs(t,
command(agent,admit(psgr,flt,seat)))\cr &\quad \quad else outputs(t, command(agent,
don't admit(psgr,flt))))\cr}$$

	Asserting that certain outputs occur and that certain propositions
hold doesn't establish that others don't occur.   Therefore, the program
as given is to be supplemented by circumscribing certain predicates,
namely $arises$, $outputs$ and $revoke$.

Remarks:

	1. Proving that a program fulfills its commitments
seems to be just a matter of expressing a commitment as making
sure a certain sentence is true, e.g. that the passenger is
allowed on the airplane.  In that case, proving that the program
fulfills its commitments is just a matter of showing that the
sentences expressing the commitments follow from the sentence
expressing the program.  The problem now is to decide what
class of sentences to allow as expressing various kinds of
commitments.  If the commitments are to be externally
expressed as promises, then they have to belong to the
I-O language.

	2. Commitments are like specifications, but they are
to be considered as dynamic, i.e. specific commitments are
created as the program runs.  We can ask what are the
program's commitments when it is in a given state.  Indeed
we might even be able to ask the program what its commitments
are.

	3. An Elephant interpreter need only match the inputs
against the program statements with inputs as premisses.  It then
issues the outputs, performs the actions and asserts the other
conclusions of the statement.  The circumscriptions should not
have to be consulted, because they can be regarded as asserting
that the only events that occur are those specified in the
statements.  Here the situation is similar to that of logic
programming.  The circumscriptions {\it are} used in proving that
the program meets its specifications, e.g. fulfills its commitments.

	There is a theorem about this that remains to be
precisely formulated and then proved.  Making it provable
might involve some revisions of the formalism.

\section{Remarks}

	Many kinds of human speech act are relative to social
institutions that change.  For example, a challenge to a duel
in societies where dueling was customary
was not just an offer.  It generated certain obligations on
the part of the challenger, the person challenged and the
community.  Suppression of dueling was accomplished partly
by intentional changes in these institutions.
The exchange of speech acts among computer programs will
often involve the design of new institutions prescribing
the effects of speech acts.  For example, the kinds of
external obligation created by business promises are partially
specified by the Uniform Commercial Code that many states
have enacted into law.  Programs that make commitments will
have to be specified in some way that corresponds to
something like the Uniform Commercial Code.  The point is
to be able to prove that (subject to certain assumptions)
the programs do what is legally required.

	Perhaps we will need three levels of specification,
internal, illocutionary and perlocutionary.  Internal specifications
may involve computing a certain quantity, regardless of whether
output occurs.

	We may need to consider joint speech acts such as making
an agreement.  For some purposes an agreement can be considered
as an offer followed by its acceptance.  However, we may know
that two parties made an agreement without knowing who finally
offered and who finally accepted.

	It seems that (Searle and Vanderveken 1985) improves on
(Searle 1969) in distinguishing successful and non-defective
performance of speech acts.  It isn't yet clear whether these
distinctions will play a role in computer speech acts.  Perhaps
the logic of illocutionary acts proposed in that book will be
useful in writing specifications and proving that programs meet
them.

	Writing this paper began with the simple notion of a program
that makes commitments and the notion of proving that it fulfills
them.  Then it became interesting to consider
speech acts with more elaborate correctness conditions.  However,
we expect that the simple cases will be most useful in the
initial applications of Elephant 2000 and similar languages.

(more to come)
\section{Related Work}

	This section will relate Elephant to work by Leora Morgenstern,
Hector Levesque, Raymond Perrault and Philip Cohen among others.

% , MSG.MSG[JNK,JMC]/566P/33L
\section{Acknowledgements}

	I am indebted to
Vladimir Lifschitz,
Leora Morgenstern
and
Carolyn Talcott
for useful suggestions.

Support for this work was provided by the Information Science and
Technology Office of the Defense Advanced Research Projects Agency.
\section{References}

\noindent {\bf Austin, J. L. (1962)}: {\it How to Do Things with Words},
Oxford.

\noindent {\bf Francez, Nissim and Amir Pnueli (1978)}: ``A Proof Method for Cyclic
Programs'', {\it Acta Informatica} 9, 133-157.

\noindent {\bf Francez, Nissim (1976)}: {\it The Analysis of Cyclic Programs}, PhD
Thesis, Weizmann Institute of Science, Rehovot, Israel.

\noindent {\bf Francez, Nissim (1978)}: ``An Application of a Method for
Analysis of Cyclic Programs'', {\it IEEE Transactions on Software
Engineering}, vol. SE-4, No. 5, pp. 371-378, September 1978.

\noindent {\bf Grice, Paul (1989)}: {\it Studies in the Way of Words},
Harvard University Press.  This is a collection of his papers.

\noindent {\bf McCarthy, John (1963)}: ``Towards a Mathematical Theory of Computation'',
in Proc.  IFIP Congress 62, North-Holland, Amsterdam.

\noindent {\bf McCarthy, John (1967)}:  ``Correctness of a Compiler for Arithmetic
Expressions'' (with James Painter),
{\it Proceedings of Symposia in Applied Mathematics, Volume XIX},
American Mathematical Society.

\noindent {\bf McCarthy, John (1979)}:
``Ascribing Mental Qualities to Machines'' in {\it Philosophical Perspectives 
in Artificial Intelligence}, Ringle, Martin (ed.), Harvester Press, July 1979.

\noindent {\bf McCarthy, John (1982)}: ``Common Business Communication Language'', in
{\it Textverarbeitung und B\"urosysteme}, Albert Endres and J\"urgen Reetz, eds.
R. Oldenbourg Verlag, Munich and Vienna 1982.
%  cbcl[f75,jmc]

\noindent {\bf McCarthy, John (1986)}:
``Applications of Circumscription to Formalizing Common Sense Knowledge'',
{\it Artificial Intelligence}, April 1986.

\noindent
{\bf Newell, Allen (1981)}: ``The Knowledge Level,'' {\it AI Magazine\/},
Vol. 2, No. 2

\noindent {\bf Searle, John R. (1969)}: {\it Speech Acts},
Cambridge, Eng., Univ. Press.

\noindent{\bf Searle, John R. (1984)}: {\it Minds, Brains, and Science},
Cambridge, Mass. : Harvard University Press, 1984.

\noindent {\bf Searle, John R. and Daniel Vanderveken (1985)}: {\it Foundations of
Illocutionary Logic}, Cambridge, Eng., Univ. Press.
\smallskip\centerline{Copyright \copyright\ 1989\ by John McCarthy}
\smallskip\noindent{This draft of ELEPHA[S89,JMC]\ TEXed on \jmcdate\ at \theTime}
%File originated on 10-Jun-89
\vfill\eject\end
leftovers

  Also only parts of the program ``authorized'' to
perform certain actions do so.

	This paper is dedicated to the memory of a recent
insufficiently ambitious talk on Programming Languages of the
Year 2000.  There are two contentions.  First, speech acts
are appropriately understood as parts of procedures rather
than as logical sentences.  Second, abstract speech acts
are useful constructs in more advanced programming languages,
leading to programs with expressiveness closer to that of one
person instructing another.

Should E make or accept offers?

But what about consing up a seat assignment?

%
$$\eqalign{input(t,&request(reservation(flight,date,person)))\cr
&∧ card \{x|is\hbox{-}reservation(x,flight,t)\} < capacity(flight)\cr
⊃ output&(t,promise(reservation(flight,date,person))).\cr}$$

	I considered using the terms {\it internal} and {\it external}
instead of {\it illocutionary} and {\it perlocutionary}, but it seemed
that these gave opportunities for confusion.  Therefore, I'm sticking
to the philosophical jargon even though it involves a considerable
extension of the philosophical usage which applies to single statements.

	How should we refer to the past?  Here are some alternatives.

	1. Use temporal modal operators like {\bf P} and {\bf F} to
assert that an event occurred in the past or future.  These can be
compounded.  This has been much advocated for proving programs correct,
but it hasn't been used in programming languages themselves.  Anyway,
the modal operators seem too weak for our purposes.
%% Leora suggests more justification.

	2. Use quantifiers over times.  Here's an example.
%
$$\eqalign{∀psgr flt t(has.&reservation(psgr,flt,t)\cr
≡ ∃t'&(t' < t ∧ made.reservation(psgr,flt,t')\cr
&∧ ∀t''(t' < t'' ∧ t'' < t ⊃ ¬cancelled.reservation(psgr,flt,t'')))).\cr}$$
%
It asserts that a passenger has a reservation if there is a time
at which one was made for him and which was not followed by a
subsequent cancellation.
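	Before taking up the limitations of this approach, here is a
minimal sketch, in Python and purely for illustration, of how such a
quantificational condition could be checked against a recorded history.
The representation of events as tuples and the particular event names
are assumptions made for the example, not part of the language proposal.

# Sketch only.  Events are (time, kind, psgr, flt) tuples; the names
# made.reservation and cancelled.reservation become strings here.
def has_reservation(history, psgr, flt, t):
    """The quantificational condition: a reservation was made before t
    and no cancellation for the same passenger and flight came between."""
    return any(
        k == 'made.reservation' and p == psgr and f == flt and t1 < t
        and not any(k2 == 'cancelled.reservation' and p2 == psgr
                    and f2 == flt and t1 < t2 < t
                    for (t2, k2, p2, f2) in history)
        for (t1, k, p, f) in history)

h = [(1, 'made.reservation', 'Smith', 'UA21'),
     (5, 'cancelled.reservation', 'Smith', 'UA21'),
     (7, 'made.reservation', 'Smith', 'UA21')]
assert has_reservation(h, 'Smith', 'UA21', 10)     # remade after cancelling
assert not has_reservation(h, 'Smith', 'UA21', 6)  # cancelled at time 5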

	I believe that quantification over time is also
insufficiently expressive, and I have two examples that indicate
it.

	2.1. Consider an Elephant 2000 interpreter for a language
that provides for recursive subroutines.  We would like to say
that when a program exits a subroutine, control passes to the
statement after the {\it corresponding} entry to the subroutine.
This involves a structure of ``parentheses'' in the history where
an entry is a left parenthesis and an exit is a right
parenthesis.  We need to identify the left parenthesis
corresponding to a given right parenthesis.  I believe
quantification over sets of times or list structures of times
will work here.
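	A minimal sketch of this ``parenthesis'' matching, assuming for
illustration only that the history is just a list of entry and exit
marks:

# Sketch only.  Finds the entry event (left parenthesis) corresponding
# to a given exit event (right parenthesis) in the history.
def matching_entry(history, exit_index):
    depth = 0
    for i in range(exit_index, -1, -1):   # scan backwards from the exit
        if history[i] == 'exit':
            depth += 1
        elif history[i] == 'entry':
            depth -= 1
            if depth == 0:
                return i
    raise ValueError('unmatched exit')

h = ['entry', 'entry', 'exit', 'entry', 'exit', 'exit']
assert matching_entry(h, 5) == 0
assert matching_entry(h, 2) == 1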

	2.2. Suppose someone is paid by the hour and works a
variable number of intervals during a pay period and that there
is no a priori bound on the number of intervals.  His pay is
proportional to the sum of the lengths of the intervals.  Proving
that the recursive formula for his pay over a pay period is
additive in the pay periods also seems to require quantification
over list structures with times as some of the leaves.
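	For illustration, once the intervals have been recovered from the
history the computation itself is simple; the interval representation
below is an assumption made for the example.

# Sketch only.  Pay is proportional to the sum of the interval lengths,
# and it is additive when the intervals are split into pay periods.
def pay(intervals, rate):
    return rate * sum(end - start for (start, end) in intervals)

period1 = [(9, 12), (13, 17)]
period2 = [(10, 15)]
assert pay(period1 + period2, 20) == pay(period1, 20) + pay(period2, 20)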

	However, quantificational formulas like the above seem
somewhat awkward, and it may not be obvious how to define an
adequate class of formulas whose members are readily compiled in
code.

	3. Therefore, we have decided tentatively to use another
idea---pattern matching on the history list using tools like those
of LISP macros or of Prolog programs.  Here's a pseudo-Prolog
program for determining whether a passenger has a reservation.
%
$$\eqalign{hasreservation&(Psgr,Flt,Seat,history)\cr
 :- &finalseg(history,X.U),\cr
&ismakrev(X,Psgr,Flt,Seat),\cr
&¬(∃Y)(member(Y,U),cancelsrev(Y,X)).\cr}$$
%
Here  $finalseg$ is a predicate asserting that its second argument is
a final segment of the list comprising its first argument.  $X.U$ is
the Lisp $cons$, also used in some Prolog versions.  The quantifier
part is what is most non-standard, but it is in accordance with
(McCarty 1989).
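	For comparison, the same pattern match can be sketched in a
conventional language.  The following Python fragment is only an
illustration; is_make_rev and cancels_rev stand in for $ismakrev$ and
$cancelsrev$, and the event representation is an assumption.

# Sketch only.  Look for a final segment of the history whose head X is
# a make-reservation event that no later event Y cancels.
def is_make_rev(e, psgr, flt):
    return e[0] == 'makerev' and e[1] == psgr and e[2] == flt

def cancels_rev(y, x):
    return y[0] == 'cancelrev' and y[1:] == x[1:]

def hasreservation(psgr, flt, history):
    for i, x in enumerate(history):          # X heads a final segment
        if is_make_rev(x, psgr, flt):
            u = history[i + 1:]              # U is the rest of that segment
            if not any(cancels_rev(y, x) for y in u):
                return True
    return False

h = [('makerev', 'Smith', 'UA21', '17C'), ('cancelrev', 'Smith', 'UA21', '17C')]
assert not hasreservation('Smith', 'UA21', h)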

	Why allow direct reference to the past?

	1. Humans do it, so it may be convenient.  We'll see.
%%Leora: because of partial knowledge should be useful for program
%%for the same reason.

	2. Most work programmers do is revising programs rather than
writing them {\it ab initio}.  Much of this work involves understanding
the program to be revised.  We would like to be able to revise programs
without reading them, at least reading as little as possible.
If the original programmer designed data structures, it is hardly
going to be possible to revise the program without reading descriptions
of the data structures whose design may require revision.  However,
if memory takes the form of reference to past events, there is a better
chance that the events will not require revision.
The airline reservation programs of this paper give reason to hope
that this strategy will work.

	3. We needn't be fanatical about this.  Elephant 2000 can include
arrays and have actions that store information in them.  If we regard
storing information in an array as an event, we can have it both ways.

\section{What is an Airplane Reservation?}

	An airplane reservation includes a commitment by the
airline to allow the passenger on the flight unless the
reservation is cancelled or the airline has another reason not to
let the passenger on.  It also includes a commitment to pay denied
boarding compensation if the passenger is not allowed on the
plane unless the airline has a reason to do otherwise.  There are
also certain obligations associated with helping the passenger
complete his trip if the flight is cancelled or if an incoming
connecting flight is too late to make the connection.

	Actually, there are two kinds of commitment---internal
and external.  Keeping internal commitments is a specification
to be verified.  External commitments are part of the business
practice of the owner of the program and may have legal
and public relations consequences.

	However, we probably don't want to identify the
reservation with either kind of commitment.  Included in the
reservation may be other decisions by the airline that aren't
commitments to the passenger, who may not even know about them.

	Someone might say, ``What good is such a commitment
if it can be cancelled at any time by the airline for any reason?''
In fact, such commitments are quite useful compared to not having
them.  The reason is that airlines are not in business to
systematically avoid honoring reservations.
  One might try to define precisely what commitment there is,
but I don't think that's the right idea.  Instead we rely on
nonmonotonic reasoning.  The commitment is valid unless there is
specific reason not to honor it.  This reason could even be local
to a particular airport and maybe to a particular shift supervisor.

	Thus instead of trying to define a reservation precisely,
we regard it as an abstract object.  Airline reservations can be
requested, can be given, can be cancelled, can be honored and can
fail to be honored.

	Searle (1969) treats promises, etc. as ``institutions''
with rules.  Airline reservations should be regarded as
institutions, ephemeral and changeable, to be sure.
\section{Additional Features}

1. Authorization.  We may want to include in an Elephant 2000
program statements of what it is authorized to
do.  If we do that, there will be a sentence asserting that
the program does only what it is authorized to do.  This may
include authorizing people and other programs to perform certain
actions.

2. Set theory.  In referring to the past, we propose to allow the
full resources of set theory including the comprehension schema
that allows defining sets in terms of properties.

3. Contexts.  We propose to consider each statement as
defining a context in which abbreviated terms can be used.
%%Leora wants more.

$${\bf if} ¬full flt {\bf then} {\bf accept.request} {\bf commit} admit(psgr,flt)$$
%
This statement says that if the flight is not full the program should
accept a request to commit itself to admit the passenger to the flight.
%
$${\bf answer.query} {\bf committed} admit(psgr,flt)$$
%
The program should answer a query as to whether it is committed to
admit a particular passenger to a particular flight.
%
$${\bf accept.request} {\bf decommit} admit(psgr,flt)$$
%
It should accept a request to cancel a reservation.
%
$$\eqalign{{\bf if} {\bf now} = {\bf time} flt
& ∧ {\bf committed} admit(psgr,flt)\cr
&{\bf then} {\bf accept.request} admit(psgr,flt)\cr}$$
%
If the passenger requests to be admitted to the flight at the time of the
flight, the program should do it.
%
$$full flt ≡ card\{psgr | {\bf committed} admit(psgr,flt)\} = capacity flt$$
%
This is a definition of the flight being full.

${\bf accept.request}$ is an action defined in the Elephant language.
It means to do what is requested.

${\bf commit}$ is an action of the Elephant language.  It means to commit the
program to do its argument in the future.
%%Leora: Don't you often commit to a particular time in the future?
%% Don't you need explicit references for this?

$admit(psgr,flt)$ means admitting the passenger $psgr$ to the flight
$flt$.  It is an action defined in the particular program.

The condition
$full flt$ refers to the particular flight.  A context mechanism
to be explained later makes it possible for the interpreter to take
the unexplained occurrence of $flt$ as referring to the request
that appears later in the statement.
%%Leora: context not explained.

${\bf committed}$ is a predicate that tells whether a certain commitment
exists.  The interpreter must examine the past to determine whether
a commitment has been made.

${\bf decommit}$ is an action cancelling a commitment.

The definition of the predicate $full$ as applied to flights uses
set theoretic notation.
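To make the intended operational reading concrete, here is a minimal
interpreter sketch in Python, purely for illustration.  The event
formats, the capacity table and the procedure names are assumptions;
the point is that nothing is stored except the history itself.

# Sketch only.  'committed' and 'full' are computed by scanning the
# history of past events; no separate reservation data structure exists.
capacity = {'UA21': 2}          # assumed capacity table
history = []                    # events such as ('commit', psgr, flt)

def committed(psgr, flt):
    # the latest commit/decommit event for this passenger and flight decides
    for (kind, p, f) in reversed(history):
        if (p, f) == (psgr, flt) and kind in ('commit', 'decommit'):
            return kind == 'commit'
    return False

def full(flt):
    psgrs = {p for (kind, p, f) in history
             if kind == 'commit' and f == flt and committed(p, f)}
    return len(psgrs) >= capacity[flt]

def accept_request_commit(psgr, flt):
    if not full(flt):
        history.append(('commit', psgr, flt))
        return 'promise: admit %s on %s' % (psgr, flt)
    return 'declined: flight %s is full' % flt

def accept_request_decommit(psgr, flt):
    history.append(('decommit', psgr, flt))

def answer_query_committed(psgr, flt):
    return 'yes' if committed(psgr, flt) else 'no'

print(accept_request_commit('Smith', 'UA21'))   # promise
print(answer_query_committed('Smith', 'UA21'))  # yes
accept_request_decommit('Smith', 'UA21')
print(answer_query_committed('Smith', 'UA21'))  # no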
\section{Speech Acts aka Performatives}

	Linguists and philosophers of language have classified
and studied non-declarative sentences.  Linguistic attention is
drawn to some kinds of non-declarative sentences because they
have a different syntax from declarative sentences in the
grammars of natural languages.  These include questions and
imperative sentences.  Others, it has been pointed out, need to
be regarded as semantically (or pragmatically) non-declarative
even though they have declarative grammatical forms.  An example
is the sentence ``I now pronounce you man and wife''.  It isn't
just a statement of fact, because, uttered by a suitable person
under suitable circumstances, it creates a state of marriage and
gives the two parties the rights and duties thereof.

	These sentences are sometimes called {\it performatives}
and sometimes {\it speech acts}, so far as I understand
it, whether they are being studied syntactically or pragmatically.
The performatives include questions, commands, requests, ordinary assertions,
promises and others like the above marriage example.  It seems to
me that the marriage example might be called a declarative if
the term hadn't been pre-empted.

	The leading authority on speech acts is the Berkeley philosopher
John Searle (19xx).  I don't know if he would approve of
what follows.

	As is usual in science and philosophy, speech acts have
been studied entirely (are there exceptions?) as natural
phenomena which are to be observed and characterized.  {\it What
kinds of speech acts are there, and what are their properties?}
As is common in engineering, mathematics and AI, we ask a
different question.  {\it What good are speech acts, and which
kinds should be designed for what purposes?}  Of course, these
studies will interact.

	Here's the idea.  Suppose we include in a programming
language a notion of {\it performative} and give performatives a distinguished
syntax that permits compilers, program transformers, and proof-checkers
to recognize the different kinds of performatives and parse their
structures.  We can then give these performatives a certain
``semantics'', where we mean semantics in the sense of programming
language semantics, which often corresponds with what linguists
would call pragmatics.  Once we have introduced this semantics
we get certain ``internal'' notions of correctness.  A
 {\it performatively correct} program answers questions
truthfully, carries out its promises, fulfills its commitments,
 obeys properly
authorized commands, obeys restrictions imposed on
its behavior by commands and by ``speech acts''
analogous to the act of marriage.

	The expected benefit of admitting speech acts in
programming languages or even  basing languages on
speech acts is the same as those claimed for other higher
level language constructs.  The programmer's intention
is more clearly represented in the program, the program
is easier to understand and to modify, and the notion
of its correctness is more definite and readily proved.
We also expect programs to be shorter.

	The present paper introduces two other innovations.
The first seems essential to take proper advantage of
including speech acts, and the second makes the programs
shorter.  In the preliminary version of Elephant 2000, we
will not worry about efficient compilability, or even about
compilability at all.  We concentrate on features that
permit the programmer to express what he wants the program
to do.

\section{Reference to the Past}

	

\section{Remarks}

1. The philosophical notion of speech acts regards them as natural
kinds whose nature is to be discovered.  This paper treats speech
acts as something to be invented.  The question is what sorts of
speech acts are useful.  Naturally, contemplating human usage is
helpful in deciding what formalisms to introduce.

\noindent Notes:

How should our performatives differ from those of natural language?

1. Our performatives are abstract.  We define
promises without saying how they are to be expressed.  Later
we may choose one or more concrete representations.

What is a reservation?
	It is a commitment to let the passenger sit in a particular
seat on a particular flight.  Perhaps giving the passenger the
reservation consists of truthfully informing him that the system
has made this commitment.  Not quite.  Giving the passenger the
reservation also includes assuming a legal obligation not to
cancel the reservation except for an allowed reason.
Maybe we can regard assuming the legal obligation as an
internal act, and the external act consists in telling the
passenger truthfully that the obligation has been assumed.
Remember that the passenger will normally not know the exact
obligation that has been assumed.  This is another reason
for keeping the notion of reservation abstract.

request reservation(<flight>,<day>)

2. The program should {\it ask for} the information it needs.

3. People have asked whether Elephant is a programming language
or a form of specification.  It is a programming language in
that a compiler can generate a machine language program
from an Elephant program.  However, Elephant lacks a major
quality of ordinary programming languages.  There is no longer
any guarantee that the program will function correctly.
This is because the process of compilation involves
nonmonotonic inference on the part of the compiler.
The compiler generates assumptions and the correctness
of the compiled program depends on the assumptions being
true.  The compiler will report the assumptions it made,
and if the user doesn't like them, he can say $maybe(not\ p)$
in his next version, forcing the compiler to generate code
that takes into account the new possibility.  For example,
the compiler will often assume that necessary information
is present.  If told that it might not be present, it will
generate a program that asks for it.

	One of the ideas that came from trying to invent
Elephant is that the above property may be inevitable in
trying to make programming languages that come closer to
the human ability to describe procedures and to describe
modifications to procedures.

	4. The statements include the resources of set theory,
i.e.  the ability to refer to sets defined by properties.  Sets
of times and events are used often.

We need to provide for committed future actions that are not performed
in response to an input.  They might be performed at a promised
time or they might be started by a request and continue
for a long time.

Martin Abadi suggested a problem about delegation of authority, MSG.MSG[1,JMC]/343P/40L
w90.in[let,jmc]/343p

The speech act theorists may have missed the following.  Not only do
speech acts have non-propositional content, but so does mental state.
A person's mental state includes not only his beliefs and knowledge
but also his intentions, his authorizations, his obligations and
his generalized accounts receivable.
\section{Introduction}

	The Elephant 2000 programming language involves two main ideas.

	1. The language statements include {\it performatives} like
asking questions, making requests, answering questions, making
promises, making pronouncements that create facts (analogous to a
minister pronouncing two people man and wife) and making internal
commitments.
  Giving a passenger a reservation is a
performative of applied interest.

	Speech acts, otherwise known as performatives, have been
studied by philosophers, (Austin 1955) and (Searle 1969), and by
linguists.  These authors pointed out that speech acts cannot be regarded as
statements that can be true or false.  Their ``happy'' use
involves various conditions such as sincerity, veracity and
fulfillment.  In our programming use, it makes sense to ask
whether the program answers questions truthfully and carries out
its commitments.  This is an intrinsic notion of correctness, i.e.
it doesn't involve external specifications.  It also makes
programs more modular and easier to understand.

	Naturally, the philosophers' and linguists' conditions for
performatives require revision for programming language purposes.

	2. Expressions of the Elephant 2000 language can refer to
the entire past.  This feature means that data structures need
not be present in programs.  For this the program ``parses the
past'', i.e. uses pattern matching on the past to give values to
local variables.

\section{Parsing the Past}

	The present form of this section is even more tentative
than the rest of the paper.

	Part of the objective of Elephant 2000 is to make programs
revisable with as little reading as possible.  Consider the following
analogy.  The boss of the airline can tell his underlings: don't
seat Iranians next to Iraqis; they might fight.  He can give this
order with no knowledge of how the reservation system works.  However,
the underlings will have to understand the system and its data
structures quite thoroughly in order to carry out the order.
Our goal is to make it possible to modify the program without knowing
about its data structures.  This can be done most easily if there
aren't any data structures used in the program itself.  All data
structures, except constant ones, are derived from the past action
of the program.  Therefore, if the program can refer to past events
in a sufficiently powerful way, it doesn't need data structures.

	There are several ways that have been proposed for referring
to the past.  These include temporal modal logic and quantificational
formulas with time as a variable.  The latter is more powerful than
the former, but apparently not powerful enough for all purposes.  The
ability to quantify over finite sets of times or lists of times or
LISP-type data structures including times as leaves seems powerful
enough.

	However, our tentative proposal for Elephant 2000 is to regard
the past somewhat as though it were a string and assign values to
bound variables by pattern matching.

\noindent Mathematical Digression on Matching Patterns to Continuous Objects

	If we consider the past as a function of a real variable
time, then there aren't atomic events.  Therefore, matching the
past requires a generalization of the concepts involved in
matching strings.  However, it may not be much more difficult.  A
non-terminal that can match a sequence can be generalized to a
non-terminal that can match an interval.  Perhaps we need to
distinguish between matching open and closed intervals.
\section{Philosophers' Discussion of Performatives}

Austin, p.163
``To sum up, we may say that the verdictive is an exercise of
judgment, the exercitive is an assertion of influence or
exercising of power, the commissive is an assuming of an
obligation or declaring an intention, the behabitive is
the adopting of an attitude, and the expositive is the
clarifying of reasons, arguments, and communications.''
old introduction:

	This article discusses Elephant 2000, a proposed
programming language for programs that interact with people,
other programs and the physical world.  Transaction processing
including expert systems that process transactions would probably
be the major potential application.  The ideas are incomplete, so
the language may take some time to specify, let alone implement.
We employ the concept of {\it speech act} or {\it performative}
statement that has been studied by philosophers and linguists.
However, we modify their notions to suit our purposes.

	The main idea is expressed most directly when we
consider a program's interaction with people.  From the point
of view of the person, he has requests and questions and
in return he expects answers, fulfillment of his requests
or commitments to fulfill them later, and he is prepared
for additional questions which he will answer.   Suppose
the user and the program interact using a terminal on which
both type ASCII strings.  If the program is written in any
of the usual programming languages, it is programmed to use
these strings, but they have no intrinsic meaning.  Only
the programmer has in mind what the strings signify.

	Both the inputs and outputs to Elephant 2000 programs
are meaningful.  Some of this meaning is peculiar to the specific
program, but Elephant itself distinguishes requests, questions, offers,
answers and statements of fact.  Its outputs include these categories
and also statements of commitment analogous to promises.  Thus
when a program issues an airline reservation it is making a qualified
commitment to let the passenger board the airplane.

	Programs that interact with other programs also benefit
from meaningful communication.

	The expected advantage of Elephant 2000 over other
programming languages is that programs will be simpler to write
and that some of the specifications, e.g. that it keep its
promises and answer truthfully, are intrinsic.  In this respect
they resemble the syntactic requirement that the program be
grammatical.  Writers, readers and modifiers of programs will
find Elephant programs more intelligible.

	Philosophers, e.g. J. L. Austin (1955) and John Searle (1969)
have studied speech acts.  They point out how speech acts differ from
simple declarative sentences and they classify some of the kinds of
speech acts and characterize their properties.  Some of what they
do is useful to us, but much of it isn't.  I also think that looking
at speech acts from the {\it design standpoint}, (Dennett 1971),
clarifies the philosophical problems.

	These intrinsic notions of correctness of Elephant programs
correspond to Austin's notion of a ``happy'' utterance of a speech
act.  It also corresponds to our slogan that an elephant is faithful,
so we'll call a program faithful if it satisfies the intrinsic notions
of correctness.

	The other main feature of Elephant 2000 is the way it handles
memory.  Instead of expressing the storage and retrieval of information,
Elephant programs refer directly to past events, e.g. inputs and outputs.
The programmer can say that a passenger has a reservation if he has
made one and not cancelled it.  Interpreted Elephant keeps a history
list of past events, while Elephant compilers generate appropriate
data structures so that the object programs can make their decisions
and computations in more general ways than just answering queries
and making updates.
\subsection{Promises}

	Searle 

1. {\it Normal input and output conditions obtain.}


ideas:

1. The logical form of Elephant is to be a collection of sentences
of logic.  The properties of the program are to follow from the
program itself, the axioms of the domain and possibly from some
general Elephant axioms.

2. This logical form may consist of additions to history associated
with each event.  The state associated with a history is what is
true when the events in the history have occurred.  Reduced states
include the information necessary to continue to execute the
program.

3. Here is the distinction between illocutionary and perlocutionary
that we will use---and it's sharp.  Correctness of a program
performing only illocutionary acts is defined in terms of the
program's inputs and outputs and perhaps its internal state.
The correctness of programs performing perlocutionary acts
is defined in terms of the state of the world.

4. We may speak of giving illocutionary specifications and perlocutionary
specifications.  Usually when we give illocutionary specifications,
we have something perlocutionary in mind, but we keep it informal.

5. Include requests for permission and giving permission.

old VAL suggestion, get logic formula corresponding to
correct behavior of a program that commits to set x←1
when user requests the value of x.


	We can compare Elephant with ``programming in English''.  
COBOL translates 1960 understanding of programming into English.
That wasn't hard.  

Sept 13
	VAL suggests

for i = 1 to 10 do read(x);
for j = 10 downto 1 do write (x when i was j)

Also
	n:=0; x:=1;
	for n=1 to 10 do x:=(x when n was n-1)*n

for factorial.

The best example: Read a travel diary and find out when the owner visited
Paris last time.

	while not eof read(year,city);
	write(year when city was Paris)
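These examples suggest one way an implementation might realize the
``x when i was j'' construct: record snapshots of the variables and
search them.  The following Python sketch is only an illustration, and
the snapshot representation is an assumption.

# Sketch only.  A history of variable snapshots supports "when ... was ...".
history = []

def record(**state_vars):
    history.append(dict(state_vars))    # snapshot the named variables

def when(wanted, cond_var, cond_value):
    """Value of wanted in the most recent past state where cond_var == cond_value."""
    for state in reversed(history):
        if state.get(cond_var) == cond_value:
            return state[wanted]
    raise LookupError('no such past state')

# the travel-diary example: when did the owner last visit Paris?
for year, city in [(1985, 'Rome'), (1987, 'Paris'), (1989, 'Kyoto')]:
    record(year=year, city=city)
print(when('year', 'city', 'Paris'))    # prints 1987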

	RPG suggests that speech acts may refer to the past.

more VAL
The following program reads a string of left and right parentheses
and then points to the last unmatched left parenthesis (I think).

	i:=0; depth:=0;
	repeat
	    i:=i+1;
	    read(x);
	    if x='(' then depth:=depth+1
		     else depth:=depth-1
	until eof;
	write(i when (depth was depth) and not (i was i))

(That is, write the value that i had when depth was the same as now last time).

sep 19
	In order to specify the correctness of the reservation program,
we have to put in a restriction that more people can't be admitted to
an airplane than its capacity allows.  Otherwise, we can prove correct
a program that gives a reservation to anyone who asks and lets everyone
on the airplane.  This seems to involve a formalization of ``can''.
How can we avoid having to do this?

oct 10
	Discuss negotiation.

Oct 22
	The reservation program has a perlocutionary specification
of actually letting the passenger on the airplane.  This can be
done only if the airplane is not full.  The formulation should
be as follows.

1. Impose the illocutionary restriction that the program should
not order that the passenger be let on the airplane if the
airplane is full.

2. Prove that the program meets this illocutionary restriction.

3. Assume that the order to let the passenger on the airplane
will be obeyed if the airplane is not full.

Now prove that the program will meet its perlocutionary specification.

Mar 7
	There are also perlocutionary states of authorization and
obligation and generalized accounts receivable.

mps
Please
Send a copy of "Networks considered harmful" and also
Elephant draft to Nafeh

Geanakoplos met at TARK 3
mps
Please
send a copy of Measures of the value of information to
Prof. John Geanakoplos
Dept. of Economics
Yale University
New Haven, CT 06520
with the following note
Dear John
	Here's the paper I mentioned.  My more general ideas on
contracts are not written up at present.  The key idea is that
the ability to write contracts properly motivating each party
depends on each party knowing something about the other's
business and costs of doing his part.

	As I remarked, some people might suffer the theological
disappointment of expecting infinite reward and getting only
a finite reward but with infinite expected value.  It occurs to
me that if the successive recipients would co-operate and promise to share,
each could get infinite reward after all, assuming that the procession
of saints (or pardoned sinners) continued indefinitely and that
the reward was storable and transferable.  It isn't
obvious how fast they could afford to increase their consumption
of reward.

Sincerely,
\section{Elephant Programs as Sentences of Logic}

$$\eqalign{(∀psgr flt t)(&input t = request make\hbox{-}commitment admit(psgr,flt)\cr
&∧ ¬holds(full flt,t)\cr
⊃ (∃seat)(&arises(t, commitment admit(psgr,flt,seat))\cr
&∧ outputs(t,promise admit(psgr,flt,seat)))).\cr}$$

Note the use of $admit$ first with two arguments and later with three.
We suppose that admit actually has a large set of arguments with
default values.  When an $admit(\ldots )$ expression is encountered,
it is made obvious which arguments are being given values by the
expression and which get default values.  The signals for this are
the name of the variable or an actual named argument as in Common
Lisp.  The compiler will have to come up with a way of assigning
a seat.  Maybe we have to say more so that the seat will be selected
from those available.
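Here is a sketch of the defaulting convention, for illustration only;
seatf and the particular argument list are assumptions.

# Sketch only.  admit has more arguments than are usually written;
# unsupplied ones get defaults, here a seat chosen by the assumed seatf.
def seatf(flt):
    return '17C'                 # stand-in for a real seat-assignment procedure

def admit(psgr, flt, seat=None):
    if seat is None:
        seat = seatf(flt)
    return (psgr, flt, seat)

print(admit('Smith', 'UA21'))             # two arguments; seat defaulted
print(admit('Smith', 'UA21', seat='2A'))  # three arguments, keyword style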

$$\eqalign{(∀psgr flt t)(&input t = query exists commitment admit(psgr,flt)\cr
⊃ outputs(t, &if exists(t, commitment admit(psgr,flt)) then\cr
&confirm commitment admit(psgr,flt,seat) else deny input t)).\cr}$$

$$\eqalign{(∀psgr flt t)(&input t = request cancel commitment admit(psgr,flt)\cr
&⊃ revoke(t,commitment admit(psgr,flt))).\cr}$$

$$\eqalign{exists(t,commitment x) ≡ (∃t')(&t' < t ∧ arises(t', commitment x)\cr
&∧ (∀t'')(t' < t'' < t ⊃ ¬revokes(t'', commitment x))).\cr}$$

$$\eqalign{(∀psgr flt t)(&input t = request admit(psgr,flt)\cr
⊃ &if exists(t, commitment admit(psgr,flt)) then\cr
&outputs(t, command(agent,admit(psgr,flt,seat)))\cr
&else outputs(t, command(agent, don't admit(psgr,flt)))).\cr}$$
% temporary only Elephant programs in logic
\vfill\eject
\section{Elephant Programs Expressed in Logic}

	With Algol 48 and Algol 50 as examples, we can now consider
the logical form of Elephant 2000 programs.
Here is the first of several possible forms.

	As with Algol 50, we use a state vector $\xi(t)$ for
the state of the program during operation.  The program is
expressed by a formula for $\xi(t+1)$.  Since the program
interacts with the world, we also have a state vector $world(t)$
of the world external to the computer.  Because we can't have
complete knowledge of the world, we can't expect to
describe $world(t+1)$ by a formula, but proving perlocutionary
specifications will involve assumptions about the functions
that determine how $world(t)$ changes.

	We have
%
$$\xi(t+1) = update(input world(t),\xi,t)$$
%
and
%
$$world(t+1) = worldf(output \xi(t),world,t).$$
%
Notice that we have written $\xi$ and $world$ on the right sides
of the two equations rather than $\xi(t)$ and $world(t)$, which
might have been expected.  This allows us to let $\xi(t+1)$ and
$world(t+1)$ depend on the whole past rather than just the present.
(It would also allow equations expressing dependence on the future,
but since we won't write such equations, we don't have to try to
make sense of that possibility).
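	The two equations describe an execution scheme that can be
sketched directly.  The following Python fragment is only an
illustration; the placeholder update, worldf, input and output
functions are assumptions standing in for the program and for the
world.

# Sketch only.  The whole histories xi and world are passed to update and
# worldf, so the next states may depend on the entire past.
def run(update, worldf, inputf, outputf, xi0, world0, steps):
    xi, world = [xi0], [world0]
    for t in range(steps):
        xi.append(update(inputf(world[t]), xi, t))        # xi(t+1)
        world.append(worldf(outputf(xi[t]), world, t))    # world(t+1)
    return xi, world

# trivial placeholder dynamics, just to show the shape of the loop
xi, world = run(update=lambda i, xi, t: xi[t] + i,
                worldf=lambda o, world, t: world[t] + o,
                inputf=lambda w: 1,
                outputf=lambda x: x,
                xi0=0, world0=0, steps=5)
print(xi, world)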

	For the reservation program we might have
%
% val doubts the need for q=
$$\eqalign{update(i,\xi,t) = &qif i q= qrequest qmake qcommitment admit(psgr,flt)
∧ ¬full qc1(flt,\xi(t))\cr
% qc1 is a contents function
\quad qthen &qadjoin(\xi(t),qcommitment admit1(psgr,flt,seatf(\xi(t))),\cr
&qoutput(assert grantrequest admit1(psgr,flt,seatf(\xi(t)))))\cr
% what to do if the flight is full
%
qelseif& i q= qquery qexists qcommitment admit(psgr,flt)\cr
	&qthen (qlet ((x (search commitment admit1(psgr,flt,?seat)))\cr
	(if& (null x) (qadjoin (\xi t) NO) \cr
% VAL suggests that it is better to put a universal quantifier on the outside
}.$$
% What about a general form of query and of assertion?
% what about some answers or all answers?
% what about two pass translation to logical sentence via lisp program
% with history data structure, e.g. like Algol 50

% illocutionary = behavioral
% perlocutionary = accomplishment
% VAL suggests \xi(t+1) = update(i(t),\xi,t)
% and i(t) = input world(t)
% in order to write specifications and proofs for behavior w/o mentioning world

% prove correctness of simplified program with just one statement
\vfill\eject
Notes on Searle and Austin and Grice

25 - Searle credits Austin with perlocutionary.
	``One only refers as part of the performance of an illocutionary
act''.
quote from Frege: ``Nur im Zusammenhang eines Satzes bedeuten die
W\"orter etwas.''  (Only in the context of a sentence do words mean
something.)

I don't think we need be bound by this.  For the benefit of our programs
we can create a database of words and longer expressions with the
intent that some of them can have definite references for later use
in sentences, internal and external, without an initial commitment
to any particular sentence.  This database can then be extended
by observations of the program.

Because Searle and other philosophers don't have the notion of
approximate theory, they are excessively conservative about
what is meaningful.  For example, ``Searle's dog'' has a meaning
independent of whether he has one, and a reference if he has exactly
one.

Grice
He lives somewhere in the South of France.  Implies that more
info isn't available.

She made sounds resembling those ordinarily produced in singing
Home Sweet Home.  Implies she didn't sing it well.

I am out of gas.  There's a gas station around the corner.
Implies there's no reason why it isn't usable.

Smith doesn't seem to have a girl friend.
He makes a lot of trips to New York.
Implies the possibility of a girl friend in NY.

Fred Pryor, Hoover, 5-3738

Leora suggestions:

reference to future situations - perhaps in the remarks section

p. 5 not clear here what she means

p. 6 Don't see how to give more justification now.

p8,9 illegible comment

12 ref to Morgenstern Thesis

 \section{What Good is Elephant?}

In its present stage Elephant is primarily a research
exploration of the characteristics and possibilities of
a programming language including the features of
I-O based on speech acts, direct reference to the past,
intrinsic correctness criteria, representation of programs
by sentences expressing operational semantics and distinguishing
illocutionary and perlocutionary specifications.  Nevertheless,
the speech act part has obvious applications.

There is a widespread opinion that the productivity improvements
expected from computerization of service and bureaucratic
functions have been slow to be realized.  Our opinion is
that a major reason is that many of the service and
bureaucratic functions involve interaction among independent
organizations and individuals in such an essential way that
large increases in productivity depend on computerizing these
interactions in a high-level way.

Computerization of interaction is frustrated by the low
level at which inputs and outputs are specified.  Strings
are specified rather than facts, requests, etc.  Even worse,
many programs, e.g. airline and other reservation systems,
specify their outputs in terms of display updates.  The
fashion for icons may be making matters even worse.

The ideas of Elephant and its companion project CBCL offer a way out.
Oct 17 notes

levels of intentionality
	Some of the speech acts and corresponding correctness conditions
do not require ascribing beliefs and other mental qualities to the
program.  Others do.

insert What good is elephant

finish Leora comments

get rid of redundancies

Illocutionary and perlocutionary accounts of observations:
	The most straightforward treatment of observations is
with a statement (setf y (observe x)) where  x  is something like
``airplanes in the traffic pattern''.  This then causes
the variable  y  to be set to some complex object.  Illocutionary
specifications can be written in terms of the value  y  gets.
The most straightforward perlocutionary assumption is that  y
correctly represents  x.  However, we might have a more complex
statement  (setf y (determine x)), where  (determine x) may be
some complex procedure that asks questions and evaluates the
reliability and even honesty of answers.

Lisp syntax for Elephant

Nov 7

most recent
mr(x, ...

exit(
go(1+loc(mr λe. entry-event e ∧ subr=foo ∧ exited e)))

Suppose executions take the form

∃e. e ε h ∧ time e = now ∧

\section{Levels of Intentionality}

	Consider the specification that the program be
 {\it sincere}, i.e. that it only assert what it believes.
We could also ask that it be {\it informative}, i.e. if it is
asked a question and it believes an answer, then it should
give that answer.

	We want to be able to verify that some programs
are {\it sincere} and {\it informative} without having to
solve all the philosophical problems concerning belief, i.e.
without a general definition of belief.

	In some cases, this should be easy.  For example, suppose
the program's sole resource is a database of employees of a
company.  We may then write an axiom that its beliefs are
precisely the atomic sentences asserting that the persons
mentioned in the database are employees together with an
axiom asserting that it believes that these are all the
employees.  Then the appropriate form of sincerity and
informativeness is that it should answer yes when asked
whether a person is an employee if the person is listed
in the database and no if he isn't listed.  To all other
questions it should answer that it doesn't know.  A proof
of sincerity is a proof that it behaves in this way.
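Here is a tiny sketch of the behavior just described, for illustration
only; the question format and the database contents are assumptions.

# Sketch only.  The program's beliefs are exactly the contents of the
# employee database, and its answers are sincere and informative.
employees = {'Jones', 'Nguyen'}

def answer(question):
    kind, *rest = question
    if kind == 'is-employee':
        return 'yes' if rest[0] in employees else 'no'
    return "I don't know"

print(answer(('is-employee', 'Jones')))     # yes
print(answer(('is-employee', 'Smith')))     # no
print(answer(('likes-coffee', 'Jones')))    # I don't know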
Requests for copies
, MSG.MSG[1,JMC]/67P/12L, from Nils saying CIFE might be interested
, MSG.MSG[1,JMC]/27P/24L
, MSG.MSG[1,JMC]/107P/1L
, MSG.MSG[1,JMC]/126P/1L
, MSG.MSG[1,JMC]/127P/23L
, MSG.MSG[1,JMC]/132P/1L
, MSG.MSG[1,JMC]/135P/25L
, MSG.MSG[1,JMC]/108P/13L, from Terry
 , MSG.MSG[1,JMC]/159P/1L
 , MSG.MSG[1,JMC]/187P/19L
 , MSG.MSG[1,JMC]/191P/28L
 , MSG.MSG[1,JMC]/192P/21L, csd
 , MSG.MSG[1,JMC]/196P/26L
 , MSG.MSG[1,JMC]/218P/1L
 , MSG.MSG[1,JMC]/225P/15L
final version to Ito
 , MSG.MSG[1,JMC]/235P/16L
 , MSG.MSG[1,JMC]/258P/28L
 , MSG.MSG[1,JMC]/272P/16L Beeson
 , Furukawa
 , MSG.MSG[1,JMC]/145P/1L Muninder Singh
tasks:

check for version to send to Searle
perlocutionary specs of reservation program
one program as a sentence of logic
correctness spec for one program
discuss authority
promises for future drafts
relation between promises and internal commitments
tell about reservations
a commitment is a dynamic specification

, IDEAS[1,JMC]/2P/861L